Embed: add tests for glossary and citations and foot notes #7986
Conversation
I wonder if there is a way to have the tests generate these HTML files with Sphinx, since I imagine this kind of stuff will vary across Sphinx versions? It would be awesome to have a matrix of support along the axes where this kind of stuff changes.
@ericholscher so, right now what we check is that every section is interpreted correctly by the embed app; that means having the original HTML file and the HTML of each section. We can't generate the HTML of each section automatically, since that's what we are testing. For testing mkdocs across different versions/themes for the search app I just created another version in the tests (e.g. bibtet-material, bibtext-basic, etc.), so we can add tests for more versions and themes, but we still need to write the expected HTML of each section by hand (this is actually just a copy/paste/replace thing).
btw, what takes most of the time is just finding a live example with what to test :)
@stsewd not sure I follow. Presumably this HTML is generated from Sphinx, so can't we generate it?
@ericholscher I get this HTML from Sphinx to test it. But we can't automate this, since we need to test that for a given HTML page (input) we get the expected HTML sections (output). So, currently we have an HTML page (input) and multiple pages representing each section (output). We test that the sections match what we expect: input -> sections = sections from output. If you are talking about automating only the input page, we can, but that will only work with one version (the one we currently have installed), and we would also need to provide a full project that generates the HTML we want to test; right now we can get the HTML of only the pages we want to test, from any project. So, it is easier to just copy the generated HTML from a real project.
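The fixture-based setup described here can be sketched roughly like this. This is a minimal illustration using only the standard library; `extract_section` and the fixture strings are hypothetical stand-ins, not the embed app's real parsing code:

```python
# Hypothetical sketch of the fixture approach: the full page HTML and
# the expected HTML of each section are both stored as fixtures, and
# the test only checks that the extractor maps one to the other.
import xml.etree.ElementTree as ET


def extract_section(page_html, section_id):
    """Return the serialized element whose id attribute matches section_id."""
    root = ET.fromstring(page_html)
    for elem in root.iter():
        if elem.get("id") == section_id:
            return ET.tostring(elem, encoding="unicode")
    raise KeyError(section_id)


# In the real tests these would be read from fixture files that were
# copy/pasted from a built project.
PAGE = '<html><body><div id="usage"><p>How to use it.</p></div></body></html>'
EXPECTED = '<div id="usage"><p>How to use it.</p></div>'

assert extract_section(PAGE, "usage") == EXPECTED
```

The point of the trade-off discussed above is that `PAGE` is copied by hand from a real build, so the test exercises the extraction logic but not the Sphinx generation step.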
@ericholscher @stsewd I'm doing this in sphinx-notfound-page (https://github.com/readthedocs/sphinx-notfound-page/blob/master/tests/test_urls.py#L46) and sphinx-hoverxref (https://github.com/readthedocs/sphinx-hoverxref/blob/master/tests/test_htmltag.py#L34). There are different "project examples" that we use multiple times to test different things. The test itself builds the Sphinx project and generates the HTML. Then the test uses that generated HTML as input and checks that it contains what we are looking for. For the Embed API, we could do the same and mock the storage to return the just-generated HTML, so the API uses it as a source to parse and returns the correct chunk of code (which we would have hard-coded in our tests).
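The build-inside-the-test approach referenced here looks roughly like the following sketch, assuming Sphinx is installed; the project contents and the `build_html` helper are illustrative, not the actual code from those repositories:

```python
# Sketch: build a tiny Sphinx project inside the test and assert on
# the HTML it generates, instead of asserting on a copy/pasted fixture.
import tempfile
from pathlib import Path

from sphinx.application import Sphinx


def build_html(srcdir):
    """Build a one-page Sphinx project and return its index.html."""
    outdir = srcdir / "_build" / "html"
    app = Sphinx(
        srcdir=str(srcdir),
        confdir=str(srcdir),
        outdir=str(outdir),
        doctreedir=str(outdir / ".doctrees"),
        buildername="html",
    )
    app.build()
    return (outdir / "index.html").read_text()


with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp)
    (src / "conf.py").write_text("project = 'demo'\n")
    (src / "index.rst").write_text("Demo\n====\n\nSome paragraph.\n")
    html = build_html(src)

assert "Some paragraph." in html
```

For the Embed API the test would then feed `html` to the parsing code (e.g. via a mocked storage backend) rather than asserting on it directly.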
For Sphinx we don't use the HTML directly, but the metadata from the fjson files. And for mkdocs we would have to do a system call, since it doesn't have that kind of integration like Sphinx does. It feels complicated just to generate the input HTML instead of just copying it (and seeing the differences from the expected HTML would be hard, since we would have…
In fact, due to the reasons you are mentioning here, I'm thinking that it may be good to have the embed code in a different repository then. In particular, to be able to run tests across different Sphinx versions -- if that's not possible with the current tox setup. Hardcoding the input is not bad, but we are definitely not testing this code completely. Different Sphinx versions could generate different HTML/fjson (even the same version, depending on whether it uses HTML4 or HTML5). We don't need to generate the expected output; it's fine for that to be hardcoded for each version if they differ for some reason. I think it's better to have the Sphinx tests done the correct way and the MkDocs tests hacked, than both hacked.
We can test that too; we just need to copy the generated output. I don't think we should move the repo just to change how the testing is done. I don't think using the Sphinx helper improves anything; it's more explicit to have HTML as input rather than reST.
I think what we want is:
The primary thing I'm looking for here is to ensure that new versions of Sphinx don't break our API responses, and that code changes don't break old versions of Sphinx. If we are hard-coding both the input and the output, we aren't actually testing Sphinx generation across versions and extensions, which is a major issue we're trying to address with these tests. We want to test that specific reST input, across versions of Sphinx, generates HTML we properly parse. There is also a larger question here: when the HTML changes across versions, do we want to pass that HTML change along to Embed API clients? I can see both sides to this, and it should be addressed in our design doc for v3 👍
```diff
@@ -15,7 +15,7 @@
 api_footer_urls = [
     url(r'footer_html/', ProxiedFooterHTML.as_view(), name='footer_html'),
     url(r'search/$', ProxiedPageSearchAPIView.as_view(), name='search_api'),
-    url(r'embed/', ProxiedEmbedAPI.as_view(), name='embed_api'),
+    url(r'embed/', ProxiedEmbedAPI.as_view(), name='api_embed'),
```
this needs to match the name from the "normal" api. I can rename the other one instead if we want.
Okay, so I added tests generated from the sphinx output:
Things I noticed:
So, I'm a little biased here because I like the pure HTML tests more... but I don't see benefits from using source files here, as the HTML structure hasn't changed across all those versions. What I can see changing more is the output from multiple versions of extensions/themes. And even if the HTML changed, it could be just whitespace changes. So I think we should focus more on the different structures that are generated by different themes/extensions and make our parsing code work on those structures in a general way (like how sections or definition lists are organized), and of course this is easier to test with pure HTML as input instead of having a large matrix of themes and extensions. Plus, no need for another setup to run these tests. I left the old tests around since they are already written and cover more things; we can delete them if we move fully to the other type of tests.
I think I'm actually testing with the same Sphinx version; I'll try something tomorrow. Edit: nope, all good.
@stsewd I don't quite understand why we aren't using tox to just run the full test suite against each version? That seems like the most explicit way to do this. Oh, because it's different from the RTD tests. It feels like we're reaching a point where we might be better off having the embed code in a separate repo, if it's causing this much headache.
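For reference, a tox matrix across Sphinx versions could look roughly like this. The env names, version pins, and test path below are assumptions for illustration, not the repo's actual configuration:

```ini
[tox]
envlist = py36-sphinx{18,24,35}

[testenv]
deps =
    pytest
    sphinx18: Sphinx>=1.8,<2.0
    sphinx24: Sphinx>=2.4,<3.0
    sphinx35: Sphinx>=3.5,<4.0
commands = pytest readthedocs/embed/tests {posargs}
```

Each factor (`sphinx18`, `sphinx24`, …) pins a different Sphinx release, so the same test suite runs once per version.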
I think we came down with:
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I think this PR can be closed now that we implemented EmbedAPIv3.
The code handling <dt> elements can be done in a more general way, but I'll change that in another PR; first I wanted to add tests with the current behavior.
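A more general, structure-driven way to handle definition lists (the direction suggested in the comment above) could look like this sketch: pair each term with its definition instead of special-casing glossary, citation, or footnote markup. The function name and fixture HTML are illustrative, not the PR's code:

```python
# Sketch: walk a <dl> and pair each <dt> term with the <dd> that
# follows it, regardless of which extension generated the markup.
import xml.etree.ElementTree as ET


def definition_pairs(dl_html):
    """Return (term, definition) text pairs from a <dl> fragment."""
    dl = ET.fromstring(dl_html)
    pairs, term = [], None
    for child in dl:
        if child.tag == "dt":
            term = "".join(child.itertext()).strip()
        elif child.tag == "dd" and term is not None:
            pairs.append((term, "".join(child.itertext()).strip()))
            term = None
    return pairs


HTML = (
    '<dl class="glossary">'
    '<dt id="term-build">build</dt><dd>Compile the docs.</dd>'
    '<dt id="term-embed">embed</dt><dd>Extract a section.</dd>'
    "</dl>"
)
assert definition_pairs(HTML) == [
    ("build", "Compile the docs."),
    ("embed", "Extract a section."),
]
```

Glossaries, citations, and footnotes all render as definition-list-like structures in Sphinx's HTML output, which is why a single generic walk over `<dt>`/`<dd>` pairs can cover the cases this PR adds tests for.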